Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment in an untrimmed video according to a given sentence query. All existing works first apply a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interaction with the query sentence for reasoning. However, we argue that these methods overlook two crucial issues: 1) Boundary bias: the annotated target segment is generally delimited by two specific frames serving as its start and end timestamps, and the video downsampling process may discard these two frames and take adjacent, irrelevant frames as the new boundaries. 2) Reasoning bias: such incorrect new boundary frames also introduce a reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames that enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationships among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. This mechanism also supplements the absent consecutive visual semantics of the sampled sparse frames for fine-grained activity understanding. Extensive experiments on three challenging datasets demonstrate the effectiveness of SSRN.
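The boundary-bias issue can be made concrete with a small sketch. The following is an illustrative toy, not the paper's implementation: the frame counts and the uniform-stride sampler are assumptions. It shows that after sparse sampling, the annotated start/end frames may no longer exist, so the nearest sampled frames silently become the new boundaries.

```python
# Illustrative sketch (not the paper's code): uniform sparse sampling can
# drop annotated boundary frames, shifting the effective segment boundary.

def sample_indices(num_frames: int, num_samples: int) -> list[int]:
    """Uniformly sample `num_samples` frame indices from a video."""
    stride = num_frames / num_samples
    return [int(i * stride) for i in range(num_samples)]

def snap_to_sampled(boundary: int, sampled: list[int]) -> int:
    """Map an annotated boundary frame to the nearest surviving frame."""
    return min(sampled, key=lambda idx: abs(idx - boundary))

video_len, budget = 1024, 32           # 1024-frame video, 32-frame budget
sampled = sample_indices(video_len, budget)
start, end = 100, 517                  # annotated segment boundaries
new_start = snap_to_sampled(start, sampled)
new_end = snap_to_sampled(end, sampled)
print(start, new_start)                # the start drifts to frame 96
print(end, new_end)                    # the end drifts to frame 512
```

Both annotated frames are lost here, and the model only ever sees the drifted boundaries, which is exactly the bias the siamese sampling mechanism is designed to counteract.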
Stack Overflow is one of the most popular programming communities, where developers can seek help for the problems they encounter. However, if inexperienced developers cannot describe their problems clearly, it is hard for them to attract sufficient attention and obtain the expected answers. We propose M$_3$NSCT5, a novel approach that automatically generates multiple post titles from a given code snippet. Developers may use the generated titles to find closely related posts and complete their problem descriptions. M$_3$NSCT5 employs the CodeT5 backbone, a pre-trained Transformer model with excellent language understanding and generation ability. To alleviate the ambiguity issue, where the same code snippet could be aligned with different titles under different contexts, we propose a maximal marginal multiple nucleus sampling strategy that generates multiple high-quality and diverse title candidates at a time for developers to choose from. We build a large-scale dataset containing 890,000 question posts covering eight programming languages to validate the effectiveness of M$_3$NSCT5. Automatic evaluation results on the BLEU and ROUGE metrics demonstrate the superiority of M$_3$NSCT5 over six state-of-the-art baseline models. Moreover, a human evaluation with trustworthy results also demonstrates the great potential of our approach for real-world application.
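The maximal-marginal selection idea can be sketched as a greedy trade-off between candidate quality and redundancy. The sketch below is a simplified stand-in, not the M$_3$NSCT5 implementation: `token_overlap`, the toy candidate pool, and the weight `lam` are illustrative assumptions.

```python
# Greedy maximal-marginal-style selection over sampled title candidates:
# each step picks the title maximizing (model score) - lam * (max similarity
# to titles already chosen). Similarity here is plain token Jaccard overlap.

def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity over whitespace tokens (a simple stand-in)."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def select_diverse(candidates: list, k: int, lam: float = 0.5) -> list:
    """Greedily select k titles balancing quality against redundancy."""
    chosen, pool = [], list(candidates)
    while pool and len(chosen) < k:
        def marginal(item):
            title, score = item
            penalty = max((token_overlap(title, c) for c in chosen), default=0.0)
            return score - lam * penalty
        best = max(pool, key=marginal)
        pool.remove(best)
        chosen.append(best[0])
    return chosen

# Toy candidate pool: (sampled title, model quality score)
pool = [
    ("how to sort a list of dicts by key", 0.9),
    ("sort list of dicts by key in python", 0.85),
    ("typeerror when sorting dicts", 0.6),
]
print(select_diverse(pool, k=2, lam=0.8))
```

With a strong diversity weight, the near-duplicate second title is skipped in favor of the lower-scoring but distinct third one, which is the behavior the abstract describes: several high-quality *and* diverse candidates for the developer to choose from.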
Synthetic health data have the potential to mitigate privacy concerns when sharing data to support biomedical research and the development of innovative healthcare applications. Modern approaches for data generation based on machine learning, and generative adversarial network (GAN) methods in particular, continue to evolve and demonstrate great potential. Nevertheless, there is a lack of a systematic assessment framework for benchmarking methods and determining which are most appropriate. In this work, we introduce a generalizable benchmarking framework to appraise key characteristics of synthetic health data with respect to utility and privacy metrics. We apply the framework to evaluate synthetic data generation methods for electronic health records (EHR) data from two large academic medical centers. The results illustrate that there exists a utility-privacy tradeoff in sharing synthetic EHR data. The results further indicate that no method is unequivocally the best on all criteria in every use case, which makes it evident why synthetic data generation methods need to be assessed in context.
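The utility and privacy axes such a benchmark measures can be sketched with two toy metrics of our own choosing: a dimension-wise mean gap for utility and a nearest-real-record distance for privacy. These stand-ins are illustrative only, not the framework's actual metric suite.

```python
import numpy as np

# Toy utility/privacy probes for synthetic tabular data.

def dimension_wise_mean_gap(real: np.ndarray, synth: np.ndarray) -> float:
    """Utility proxy: how closely synthetic column means track real ones."""
    return float(np.mean(np.abs(real.mean(axis=0) - synth.mean(axis=0))))

def nearest_record_distance(real: np.ndarray, synth: np.ndarray) -> float:
    """Privacy proxy: mean distance from each synthetic record to its
    closest real record (near-zero suggests memorization of real rows)."""
    dists = np.linalg.norm(synth[:, None, :] - real[None, :, :], axis=-1)
    return float(dists.min(axis=1).mean())

rng = np.random.default_rng(0)
real = rng.normal(size=(200, 5))
copied = real[:50] + 1e-6              # near-duplicates of real records
independent = rng.normal(size=(50, 5)) # freshly generated records

print(dimension_wise_mean_gap(real, independent))
print(nearest_record_distance(real, copied))       # ~0: privacy risk
print(nearest_record_distance(real, independent))  # clearly larger
```

The `copied` generator scores perfectly on utility but terribly on privacy, a tiny illustration of the utility-privacy tradeoff the abstract reports.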
In the field of knowledge graph (KG) representation, a hyper-relational fact consists of a main triple and several auxiliary attribute-value descriptions, and is considered more comprehensive and specific than a triple-based fact. However, existing single-view hyper-relational KG embedding methods are limited in application because they weaken the hierarchical structure that represents the affiliation between entities. To break this limitation, we propose a dual-view hyper-relational KG (DH-KG) structure, which contains a hyper-relational instance view of entities and a hyper-relational ontology view of concepts abstracted hierarchically from entities, to jointly model hyper-relational and hierarchical information. In this paper, we first define link prediction and entity typing tasks on DH-KG and construct two DH-KG datasets: JW44K-6K, extracted from Wikidata, and HTDM, built on medical data. Furthermore, we propose DHGE, a DH-KG embedding model based on GRAN encoders, HGNNs, and joint learning. Experimental results show that DHGE outperforms baseline models on DH-KG. We also provide an example of applying this technique in the field of hypertension medication. Our model and datasets are publicly available.
To alleviate the challenge of building knowledge graphs (KGs) from scratch, a more general task is to enrich a KG with triples harvested from an open corpus, where the obtained triples contain noisy entities and relations. Enriching a KG with newly harvested triples while maintaining the quality of its knowledge representation is challenging. This paper proposes a system to refine a KG using information gathered from an additional corpus. To this end, we formulate the task as two coupled sub-tasks, namely joint event extraction (JEE) and knowledge graph fusion (KGF). We then propose a collaborative knowledge graph fusion framework that allows the two sub-tasks to assist each other in an alternating manner. More specifically, an explorer performs JEE supervised by both the ground-truth annotations and the existing KG provided by a supervisor. The supervisor then evaluates the triples extracted by the explorer and enriches the KG with the highly ranked ones. To implement this evaluation, we further propose a translated relation alignment scoring mechanism to align and translate the extracted triples to the prior KG. Experiments verify that this collaboration improves the performance of both JEE and KGF.
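The supervisor's triple-evaluation step can be illustrated with a translation-based plausibility score. This is a hedged sketch in the spirit of TransE-style scoring, not the paper's translated relation alignment mechanism, and the two-dimensional embeddings are toy values rather than a trained model.

```python
import numpy as np

# Rank extracted candidate triples by how well the relation embedding
# "translates" head to tail in a prior KG embedding space.

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    """Lower ||h + r - t|| means a more plausible triple."""
    return float(np.linalg.norm(h + r - t))

# Toy 2-d embeddings standing in for a well-trained prior KG model.
emb = {
    "aspirin": np.array([1.0, 0.0]),
    "pain":    np.array([1.5, 1.0]),
    "fever":   np.array([-1.0, 2.0]),
    "treats":  np.array([0.5, 1.0]),
}

# Noisy triples harvested from an open corpus, ranked by plausibility.
candidates = [("aspirin", "treats", "fever"), ("aspirin", "treats", "pain")]
ranked = sorted(candidates, key=lambda tr: transe_score(*(emb[x] for x in tr)))
print(ranked[0])  # the better-supported triple would be kept to enrich the KG
```

In the full framework this ranking is what decides which extracted triples the supervisor feeds back into the KG, which in turn supervises the explorer's next extraction round.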
Realizing general-purpose language intelligence has been a longstanding goal of natural language processing, in which standard evaluation benchmarks play a fundamental and guiding role. We argue that for general-purpose language intelligence evaluation, the benchmark itself needs to be comprehensive and systematic. To this end, we propose CUGE, a Chinese Language Understanding and Generation Evaluation benchmark with the following features: (1) a hierarchical benchmark framework, where datasets are principally selected and organized in a language capability - task - dataset hierarchy; (2) a multi-level scoring strategy, where different levels of model performance are reported based on the hierarchical framework. To facilitate CUGE, we provide a public leaderboard that can be customized to support flexible model judging criteria. Evaluation results on representative pre-trained language models indicate ample room for improvement toward general-purpose language intelligence. CUGE is publicly available at cuge.baai.ac.cn.
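One plausible form of such a multi-level scoring strategy is a roll-up over the capability-task-dataset hierarchy. The plain means and the example dataset names below are illustrative assumptions, not CUGE's official scoring formula.

```python
# Roll dataset-level scores up a capability -> task -> dataset hierarchy:
# dataset scores average into task scores, task scores into capability
# scores, and capability scores into one overall number.

def roll_up(hierarchy: dict) -> dict:
    """hierarchy: {capability: {task: {dataset: score}}}."""
    capability_scores = {}
    for cap, tasks in hierarchy.items():
        task_scores = {
            task: sum(ds.values()) / len(ds) for task, ds in tasks.items()
        }
        capability_scores[cap] = sum(task_scores.values()) / len(task_scores)
    overall = sum(capability_scores.values()) / len(capability_scores)
    return {"capability": capability_scores, "overall": overall}

# Hypothetical scores; the dataset names are placeholders for illustration.
scores = {
    "understanding": {"nli": {"dataset_a": 0.7, "dataset_b": 0.8}},
    "generation": {"summarization": {"dataset_c": 0.5}},
}
print(roll_up(scores))
```

A scheme like this lets a leaderboard report performance at whichever level of the hierarchy a user cares about, which is the flexibility the abstract describes.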
Tabular data are ubiquitous in real-world applications. Although many commonly used neural components (e.g., convolution) and extensible neural networks (e.g., ResNet) have been developed by the machine learning community, few of them are effective for tabular data, and few designs are tailored to tabular data structures. In this paper, we propose a novel and flexible neural component for tabular data, called the Abstract Layer (AbstLay), which learns to explicitly group correlated input features and generate higher-level features for semantic abstraction. In addition, we design a structure re-parameterization method to compress AbstLay, reducing the computational complexity by a clear margin in the inference phase. A special basic block is built using AbstLays, and we construct a family of Deep Abstract Networks (DANets) for tabular data classification and regression by stacking such blocks. In DANets, a special shortcut path is introduced to fetch information from raw tabular features, assisting feature interactions across different levels. Comprehensive experiments on seven real-world tabular datasets show that our AbstLay and DANets are effective for tabular data classification and regression, and that their computational complexity is superior to competitive methods. Moreover, we evaluate the performance gains of DANets as they go deeper, verifying the scalability of our method. Our code is available at https://github.com/whatashot/danet.
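The grouping idea behind AbstLay can be sketched as a toy forward pass: soft feature-group masks select correlated columns, and each group is projected to higher-level features. All parameter values below are random stand-ins for learned weights, and the real layer involves considerably more machinery.

```python
import numpy as np

# Toy forward pass in the spirit of the abstract-layer idea: softly group
# input features with learned masks, then lift each group to higher-level
# features with a ReLU projection.

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def abstlay_forward(x: np.ndarray, mask_logits: np.ndarray,
                    W: np.ndarray) -> np.ndarray:
    """x: (batch, features); mask_logits: (groups, features);
    W: (groups, out_dim). Returns (batch, groups * out_dim)."""
    masks = softmax(mask_logits, axis=1)        # soft feature groups
    grouped = x @ masks.T                       # (batch, groups)
    higher = np.maximum(grouped[:, :, None] * W[None], 0.0)  # ReLU lift
    return higher.reshape(len(x), -1)

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 10))                    # 3 rows, 10 tabular features
out = abstlay_forward(x, rng.normal(size=(4, 10)), rng.normal(size=(4, 2)))
print(out.shape)  # (3, 8): 4 groups x 2 higher-level features each
```

Stacking such layers, with a shortcut back to the raw features, gives the general shape of the DANet family described above.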
An underlying assumption of recent federated learning (FL) paradigms is that local models usually share the same network architecture as the global model, which becomes impractical for mobile and IoT devices with different hardware and infrastructure. A scalable federated learning framework should accommodate heterogeneous clients equipped with different computation and communication capabilities. To this end, this paper proposes FedHM, a novel federated model compression framework that distributes heterogeneous low-rank models to clients and then aggregates them into a global full-rank model. Our solution enables the training of heterogeneous local models with different computational complexities and the aggregation of a single global model. Furthermore, FedHM not only reduces the computational complexity of devices but also reduces the communication cost by using low-rank models. Extensive experimental results demonstrate that FedHM outperforms current pruning-based FL approaches in terms of test top-1 accuracy (4.6% accuracy gain on average), with smaller model sizes (1.5x smaller on average), under various heterogeneous FL settings.
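The low-rank mechanism described above can be sketched with truncated SVD: the server hands each client the best rank-r factorization of the global weight, then reconstructs full-rank matrices from the returned factors and averages them. The shapes, ranks, and plain averaging here are illustrative assumptions, not FedHM's exact procedure.

```python
import numpy as np

# Distribute heterogeneous low-rank factors, aggregate a full-rank model.

def to_low_rank(W: np.ndarray, rank: int):
    """Best rank-r factorization of W (truncated SVD): W ~= U @ V."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank, :]

def aggregate(factors, weights):
    """Reconstruct each client's full-rank matrix and average them."""
    full = [U @ V for U, V in factors]
    return sum(w * F for w, F in zip(weights, full))

rng = np.random.default_rng(0)
W_global = rng.normal(size=(8, 8))
# Three clients with heterogeneous rank budgets (cheaper clients, lower rank):
clients = [to_low_rank(W_global, r) for r in (2, 4, 8)]
W_new = aggregate(clients, weights=[1 / 3] * 3)
print(W_new.shape)  # the aggregated model is full-rank again: (8, 8)
```

Sending the two factors instead of the full matrix is also where the communication savings come from: a rank-2 factorization of an 8x8 weight ships 32 numbers instead of 64.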
In this paper we explore the task of modeling (semi) structured object sequences; in particular we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key, over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as over other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, and present detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening sequence objects and also allows us to operate on significantly larger sequences than existing methods.
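A highly simplified sketch of the two components follows: TVM encodes one key's value sequence (here just a mean over toy value embeddings) and KA self-attends over the key-conditioned representations. The real components are transformer encoders; every function below is a crude stand-in meant only to show the data flow from key-value objects to a single sequence representation.

```python
import numpy as np

def embed(value: str, dim: int = 4) -> np.ndarray:
    """Toy value embedding (seeded from the value's hash)."""
    rng = np.random.default_rng(abs(hash(value)) % (2**32))
    return rng.normal(size=dim)

def tvm(value_sequence: list) -> np.ndarray:
    """Encode one key's values over time (stand-in: mean embedding)."""
    return np.mean([embed(v) for v in value_sequence], axis=0)

def ka(key_reprs: np.ndarray) -> np.ndarray:
    """Self-attend over key representations and pool (one toy head)."""
    scores = key_reprs @ key_reprs.T / np.sqrt(key_reprs.shape[1])
    attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return (attn @ key_reprs).mean(axis=0)

# A sequence of structured objects, re-viewed as per-key value sequences:
objects = [{"status": "open", "owner": "a"}, {"status": "closed", "owner": "a"}]
keys = sorted(objects[0])
per_key = {k: [o[k] for o in objects] for k in keys}
seq_repr = ka(np.stack([tvm(per_key[k]) for k in keys]))
print(seq_repr.shape)  # (4,): one vector for the whole object sequence
```

The pivot from object-per-step to value-sequence-per-key is the essential move: sequence length now grows with time per key rather than with (keys x time), which is what lets the approach handle much longer sequences than flattened representations.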
Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult for KGEs to infer inductively. To address this challenge, we resort to analogical inference and propose AnKGE, a novel and general self-supervised framework that enhances KGE models with analogical inference capability. We propose an analogical object retriever that retrieves appropriate analogical objects at the entity, relation, and triple levels. In AnKGE, we train an analogy function for each level of analogical inference, which takes as input the original element embedding from a well-trained KGE model and outputs the analogical object embedding. In order to combine the inductive inference capability of the original KGE model with the analogical inference capability enhanced by AnKGE, we interpolate the analogy score with the base model score and introduce adaptive weights into the score function for prediction. Through extensive experiments on the FB15k-237 and WN18RR datasets, we show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
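One plausible form of the interpolation described above combines the base KGE score with per-level analogy scores through adaptive weights. The level names, weights, and scores below are illustrative numbers, not trained values, and the exact functional form is an assumption for the sketch.

```python
# Interpolate a base KGE score with entity-, relation-, and triple-level
# analogy scores via adaptive per-level weights.

def interpolated_score(base: float, analogy: dict, lam: dict) -> float:
    """score = (1 - sum(lam)) * base + sum over levels of lam_l * analogy_l."""
    mix = sum(lam[level] * analogy[level] for level in analogy)
    return (1 - sum(lam.values())) * base + mix

analogy = {"entity": 0.8, "relation": 0.6, "triple": 0.7}  # analogy scores
lam = {"entity": 0.2, "relation": 0.1, "triple": 0.1}      # adaptive weights
print(interpolated_score(base=0.4, analogy=analogy, lam=lam))
```

Setting every weight in `lam` to zero recovers the original KGE model, so the base model's inductive behavior is preserved as a special case while the analogy terms add the new capability.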